impact statement
This man was killed four years ago. His AI clone just spoke in court.
People just can't stop using generative AI tools in legal proceedings, despite repeated pushback from frustrated judges. While AI initially appeared in courtrooms through bogus "hallucinated" cases, the trend has taken a turn, driven by increasingly sophisticated AI video and audio tools. In some instances, AI is even being used to seemingly bring victims back from the dead. This week, a crime victim's family presented a brief video in an Arizona courtroom depicting an AI version of 37-year-old Chris Pelkey. Pelkey was shot and killed in 2021 in a road rage incident. Now, four years later, the AI-generated "clone" appeared to address his alleged killer in court.
- North America > United States > Arizona (0.25)
- North America > United States > New York (0.07)
- North America > United States > Colorado (0.05)
- Law > Litigation (0.96)
- Government > Regional Government > North America Government > United States Government (0.31)
Review for NeurIPS paper: Accelerating Reinforcement Learning through GPU Atari Emulation
Weaknesses: My main concern is that the results seem to contradict what the authors claim as the benefit of leveraging GPU acceleration. Specifically, in the "impact statement" the authors describe CuLE as able to "provide access to an accelerated training environment to researchers with limited computational capabilities," but the results show the acceleration does not take effect unless you use more computation: in Figure 2, CuLE runs slower than OpenAI when using fewer environments. If someone can only afford to run 100 environments, would this mean CuLE is not useful here? The memory limitation has been noted in the paper, which is good. I was confused when looking at Table 3. First, why is there no 120 envs experiment for CuLE?
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.40)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.36)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.36)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.36)
Review for NeurIPS paper: Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
Additional Feedback: I'd be interested to hear whether the proposed approach could also benefit from an attention mechanism similar to GAT. I wasn't entirely sure about the setup for the experiment where the training size is reduced. Is this taking a fixed graph and then simply hiding an increasing portion of the node labels, or is the graph structure different between the settings with reduced training size? Is the number of nodes for which labels are predicted the same in each setting? Is each unlabelled node always connected to at least one labelled node, or does the reduction of training size also mean that the nearest labelled node might be further away in the low training size regime?
Indiana woman sentenced to prison after defrauding 96-year-old widower out of nearly $80,000
An Indiana woman has been sentenced to three years in federal prison after she used a dating app to scam a 96-year-old man out of nearly $80,000, a U.S. attorney announced Wednesday. Brittany Rakia Shawnai Lasley, 34, of Anderson, created a social media account containing fake profile information on the dating site "Plenty of Fish" and used the account to perpetrate an online romance with the man, who was a widower, according to U.S. Attorney Zachary Cunha. Over time, Lasley persuaded the 96-year-old to send her money, gift cards, credit cards and even to hand over sensitive banking information.
- South America > Colombia (0.06)
- North America > United States > Rhode Island > Kent County > Coventry (0.06)
- North America > United States > Indiana > Madison County > Anderson (0.06)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia (0.04)
Atomist or Holist? A Diagnosis and Vision for More Productive Interdisciplinary AI Ethics Dialogue
Greene, Travis, Dhurandhar, Amit, Shmueli, Galit
In response to growing recognition of the social impact of new AI-based technologies, major AI and ML conferences and journals now encourage or require papers to include ethics impact statements and undergo ethics reviews. This move has sparked heated debate concerning the role of ethics in AI research, at times devolving into name-calling and threats of "cancellation." We diagnose this conflict as one between atomist and holist ideologies. Among other things, atomists believe facts are and should be kept separate from values, while holists believe facts and values are and should be inextricable from one another. With the goal of reducing disciplinary polarization, we draw on numerous philosophical and historical sources to describe each ideology's core beliefs and assumptions. Finally, we call on atomists and holists within the ever-expanding data science community to exhibit greater empathy during ethical disagreements and propose four targeted strategies to ensure AI research benefits society.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (6 more...)
- Government (0.93)
- Social Sector (0.88)
- Information Technology > Security & Privacy (0.68)
- (3 more...)
AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements
Ashurst, Carolyn, Hine, Emmie, Sedille, Paul, Carlier, Alexis
Ethics statements have been proposed as a mechanism to increase transparency and promote reflection on the societal impacts of published research. In 2020, the machine learning (ML) conference NeurIPS broke new ground by requiring that all papers include a broader impact statement. This requirement was removed in 2021, in favour of a checklist approach. The 2020 statements therefore provide a unique opportunity to learn from the broader impact experiment: to investigate the benefits and challenges of this and similar governance mechanisms, as well as providing an insight into how ML researchers think about the societal impacts of their own work. Such learning is needed as NeurIPS and other venues continue to question and adapt their policies. To enable this, we have created a dataset containing the impact statements from all NeurIPS 2020 papers, along with additional information such as affiliation type, location and subject area, and a simple visualisation tool for exploration. We also provide an initial quantitative analysis of the dataset, covering representation, engagement, common themes, and willingness to discuss potential harms alongside benefits. We investigate how these vary by geography, affiliation type and subject area. Drawing on these findings, we discuss the potential benefits and negative outcomes of ethics statement requirements, and their possible causes and associated challenges. These lead us to several lessons to be learnt from the 2020 requirement: (i) the importance of creating the right incentives, (ii) the need for clear expectations and guidance, and (iii) the importance of transparency and constructive deliberation. We encourage other researchers to use our dataset to provide additional analysis, to further our understanding of how researchers responded to this requirement, and to investigate the benefits and challenges of this and related mechanisms.
- South America (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Africa (0.04)
- (9 more...)
RAFT: A Real-World Few-Shot Text Classification Benchmark
Alex, Neel, Lifland, Eli, Tunstall, Lewis, Thakur, Abhishek, Maham, Pegah, Riedel, C. Jess, Hine, Emmie, Ashurst, Carolyn, Sedille, Paul, Carlier, Alexis, Noetel, Michael, Stuhlmüller, Andreas
Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants? Existing benchmarks are not designed to measure progress in applied settings, and so don't directly answer this question. The RAFT benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. Baseline evaluations on RAFT reveal areas current techniques struggle with: reasoning over long texts and tasks with many classes. Human baselines show that some classification tasks are difficult for non-expert humans, reflecting that real-world value sometimes depends on domain expertise. Yet even non-expert human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets and leaderboard will track which model improvements translate into real-world benefits at https://raft.elicit.org.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > North Carolina (0.04)
- (3 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (2 more...)
Institutionalising Ethics in AI through Broader Impact Requirements
Prunkl, Carina, Ashurst, Carolyn, Anderljung, Markus, Webb, Helena, Leike, Jan, Dafoe, Allan
Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world's largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognised best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases, and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximise the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement's merits and future. Perhaps the most important contribution from this analysis are the insights we can gain regarding effective community-based governance and the role and responsibility of the AI research community more broadly.
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (2 more...)